Supplementary Material GAIT-prop: A biologically plausible learning rule derived from backpropagation of error
The GAIT-prop and ITP targets are implemented as a weak perturbation of the forward pass. The table below presents the relevant parameters. [Table: hyperparameter settings, with Parameter and Value columns; entries include the learning rate of the Adam optimiser.] The results report peak and final (end-of-training) accuracy on the training set (organised as 'peak / final'). Parameters shown in bold were chosen and used for all results presented in the main paper. We find that target propagation often does best when early stopping is implemented to 'catch' this peak, unlike the other two algorithms, whose accuracy is asymptotic. In the main paper, we showed that GAIT-propagation produces networks with final training/test accuracies which are indistinguishable from those produced by backpropagation of error. [Figure: the performance of deep multi-layer perceptrons trained by BP and GAIT-prop.]
Review for NeurIPS paper: GAIT-prop: A biologically plausible learning rule derived from backpropagation of error
I believe this paper makes a meaningful contribution to this line of work and have changed my score accordingly to support acceptance. I do have a few comments that I hope you will consider as you prepare a final version of this paper, mainly coming from a neuroscience perspective. While the method described in this paper advances the family of target prop-related models and may serve as a foundation for future work in bio-plausible learning models, I don't think it is appropriate to describe it as more biologically plausible than backpropagation. One of the commonly cited biologically implausible features of backpropagation (weight symmetry) is replaced here by an equally implausible mechanism (perfect inverse models). It is true that bio-plausible ways of approximating inverses may exist, but there are also proposals for bio-plausible ways of maintaining weight symmetry (e.g.
Review for NeurIPS paper: GAIT-prop: A biologically plausible learning rule derived from backpropagation of error
This paper presents a biologically plausible learning rule as an alternative to standard back-propagation. This is a heavily studied area in ML, with strong interest from both the ML and computational neuroscience communities. The reviewers agreed that this work presents an exciting and important contribution over the existing literature on this problem. There was extensive discussion between reviewers, with two reviewers championing the paper for acceptance. The lower scoring reviewers cited the empirical evaluation as a weakness of the paper, while others argued that the idea on its own was sufficiently interesting to the community.
GAIT-prop: A biologically plausible learning rule derived from backpropagation of error
Traditional backpropagation of error, though a highly successful algorithm for learning in artificial neural network models, includes features which are biologically implausible for learning in real neural circuits. An alternative called target propagation proposes to address this implausibility by using a top-down model of neural activity to convert an error at the output of a neural network into layer-wise and plausible 'targets' for every unit. These targets can then be used to produce weight updates for network training. However, thus far, target propagation has been heuristically proposed without demonstrable equivalence to backpropagation. Here, we derive an exact correspondence between backpropagation and a modified form of target propagation (GAIT-prop) where the target is a small perturbation of the forward pass.
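The abstract's mechanism can be sketched in a few lines: perturb the output activity slightly toward a desired output, propagate that target down through layer inverses, and make each layer's weight update purely local. The following numpy sketch is illustrative only; the network, the perturbation scale `gamma`, and the learning rate are assumptions, not the paper's actual implementation (which derives the exact perturbation needed for equivalence with backpropagation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network with square (hence invertible) weight
# matrices and an invertible nonlinearity, so exact layer inverses exist.
W1 = rng.normal(size=(4, 4)) * 0.5
W2 = rng.normal(size=(4, 4)) * 0.5

def leaky_relu(z):
    return np.where(z > 0, z, 0.1 * z)

def leaky_relu_inv(h):
    # Leaky ReLU is invertible, which makes an exact layer-wise inverse possible.
    return np.where(h > 0, h, h / 0.1)

x = rng.normal(size=4)
h1 = leaky_relu(W1 @ x)
h2 = leaky_relu(W2 @ h1)

# Output target: a small perturbation of the forward pass, nudged toward
# some desired output y_star (gamma is an illustrative step size).
y_star = np.ones(4)
gamma = 1e-3
t2 = h2 - gamma * (h2 - y_star)

# Propagate the target down through the exact inverse of layer 2.
t1 = np.linalg.solve(W2, leaky_relu_inv(t2))

# Local, layer-wise updates: move each layer's output toward its target.
lr = 0.1
W2 += lr * np.outer(t2 - h2, h1)
W1 += lr * np.outer(t1 - h1, x)
```

Note that no error signal is backpropagated explicitly: each update uses only the layer's own activity, its target, and its input, which is the sense in which such targets are 'layer-wise and plausible'.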
Biologically Plausible Learning Rules for Perceptual Systems that Maximize Mutual Information
Consider a neural perceptual system being exposed to an external environment. The system has an internal state to represent external events. There is strong behavioral and neural evidence (e.g., Ernst and Banks, 2002; Gabbiani and Koch, 1998) that the internal representation is intrinsically probabilistic (Knill and Pouget, 2004), in line with the statistical properties of the environment. We denote the input signal as x. The perceptual representation would be a probability distribution conditional on x, denoted as p(y | x). According to the Infomax principle (Attneave, 1954; Barlow et al., 1961; Linsker, 1988), the system's goal is to maximize the mutual information (MI) between the input x and the output (neuronal response) y, which can be written as max I(x; y). (1.1)
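A classical concrete instance of objective (1.1) is the Bell–Sejnowski infomax rule: for a logistic output y = sigmoid(Wx), stochastic gradient ascent on I(x; y) gives the update ΔW ∝ (Wᵀ)⁻¹ + (1 − 2y)xᵀ. The toy mixture, mixing matrix, and step size below are illustrative assumptions, not taken from the paper above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: two super-Gaussian sources linearly mixed into observations x.
n, T = 2, 2000
s = rng.laplace(size=(n, T))               # sources (illustrative)
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # mixing matrix (illustrative)
x = A @ s

W = np.eye(n)
lr = 1e-3
for t in range(T):
    xt = x[:, t:t + 1]
    y = 1.0 / (1.0 + np.exp(-(W @ xt)))    # logistic output unit activity
    # Bell–Sejnowski infomax update: gradient ascent on I(x; y)
    # for an invertible map with logistic nonlinearity.
    W += lr * (np.linalg.inv(W.T) + (1.0 - 2.0 * y) @ xt.T)
```

The (Wᵀ)⁻¹ term is the part usually flagged as non-local, which is exactly why later work (including the papers above) looks for learning rules that achieve similar objectives with purely local computations.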
Are skip connections necessary for biologically plausible learning rules?
Im, Daniel Jiwoong, Patil, Rutuja, Branson, Kristin
Backpropagation is the workhorse of deep learning; however, several other biologically motivated learning rules have been introduced, such as random feedback alignment and difference target propagation. None of these methods has produced competitive performance against backpropagation. In this paper, we show that biologically motivated learning rules with skip connections between intermediate layers can perform as well as backpropagation on the MNIST dataset and are robust to various sets of hyper-parameters.
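Difference target propagation, mentioned above, computes layer targets as t_l = g(t_{l+1}) + h_l − g(h_{l+1}), where g is a learned (imperfect) top-down model: the difference term cancels g's reconstruction error. The sketch below is a minimal illustration with hypothetical weight matrices, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical forward layer f and learned approximate inverse g.
Wf = rng.normal(size=(3, 3)) * 0.5
Wg = rng.normal(size=(3, 3)) * 0.5

def f(h):
    # Forward mapping, layer l -> l+1.
    return np.tanh(Wf @ h)

def g(h):
    # Learned (imperfect) top-down model, layer l+1 -> l.
    return np.tanh(Wg @ h)

h_l = rng.normal(size=3)
h_next = f(h_l)
t_next = h_next + 0.1 * rng.normal(size=3)   # stand-in target from above

# Difference target propagation: the correction (h_l - g(h_next))
# cancels the reconstruction error of the imperfect inverse g.
t_l = g(t_next) + (h_l - g(h_next))

# Sanity property: if the higher-layer target equals the forward activity,
# the propagated target reduces exactly to the layer's own activity.
t_l_identity = g(h_next) + (h_l - g(h_next))
```

That sanity property (zero target change above implies zero target change below) is what makes DTP stable even when g is a poor inverse, and skip connections give the targets additional, shorter paths from the output error to early layers.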